    Prioritize the ordering of URL queue in Focused crawler

    The enormous growth of the World Wide Web in recent years has made efficient resource discovery a necessity. Downloading only domain-specific web pages is not a simple task for a crawler, and an unfocused approach often yields undesired results. Several new ideas have therefore been proposed; a key technique among them is focused crawling, which can crawl particular topical portions of the World Wide Web quickly and efficiently without having to explore all web pages. The proposed approach does not rely on keywords alone for the crawl, but on high-level background knowledge with concepts and relations, which are compared with the text of the candidate page. In this paper a combined crawling strategy is proposed that integrates a link analysis algorithm with an association metric. The approach identifies relevant pages before crawling and prioritizes the URL queue so that the most relevant pages are downloaded first, to an optimal level, based on a domain-dependent ontology. The strategy uses the ontology to estimate the semantic content of a URL without exploring it, which strengthens the ordering metric for the URL queue and leads to the retrieval of the most relevant pages.
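
    A minimal sketch of the URL-queue ordering the abstract describes, assuming a simple association metric (the fraction of ontology terms found in the URL string and anchor text) blended with an externally computed link-analysis score; the weighting alpha and all names here are illustrative, not the paper's exact method:

        import heapq

        def combined_priority(url, anchor_text, ontology_terms, link_score, alpha=0.6):
            # Association metric (assumed form): fraction of ontology concepts
            # found in the URL and anchor text, so a URL's semantic content is
            # estimated without downloading the page.
            text = (url + " " + anchor_text).lower()
            hits = sum(1 for term in ontology_terms if term in text)
            association = hits / len(ontology_terms) if ontology_terms else 0.0
            # Combined ordering metric: link-analysis score blended with the
            # ontology-based association score.
            return alpha * link_score + (1.0 - alpha) * association

        class URLFrontier:
            """Priority queue that always yields the highest-scored URL first."""
            def __init__(self):
                self._heap, self._tiebreak = [], 0

            def push(self, url, score):
                # heapq is a min-heap, so negate the score for highest-first order.
                heapq.heappush(self._heap, (-score, self._tiebreak, url))
                self._tiebreak += 1

            def pop(self):
                return heapq.heappop(self._heap)[2]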

    Deep Learning in Big Data, Image, and Signal Processing in the Modern Digital Age

    Data, such as images and signals, are constantly generated from various industries, including the internet [...]

    Smart Architecture Energy Management through Dynamic Bin-Packing Algorithms on Cloud

    Smart home architecture is suitable for progressive and symmetric urbanization. Data generated by smart home appliances through the Internet of Things must be stored in the cloud, where computing resources can analyze it and derive decisive patterns in very little time. This additional storage requirement, consisting largely of unfiltered data, escalates the need for host machines, which carries the extra overhead of energy consumption; the extra cost has to be borne by service providers. Various static algorithms have already been proposed to improve the energy management of cloud data centers by reducing the number of active bins. These algorithms cannot cater to the heterogeneous requests now generated in cloud machines by people from diverse work environments while adhering to quality-parameter requirements. This paper therefore proposes and implements dynamic bin-packing approaches for smart architecture that can significantly reduce energy consumption without compromising makespan, resource utilization, or Quality of Service (QoS) parameters. The novelty of the proposed dynamic approaches over the existing static ones is that they dynamically create and dissolve virtual machines as requests arrive and complete, a dire need of present computing paradigms, by attaching a time frame to each virtual machine. Simulations were performed on the Java platform, and the dynamic energy-utilized best-fit-decreasing bin-packing technique produced the best results in most runs.
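
    A minimal sketch of a dynamic best-fit-decreasing placement of the kind the abstract describes, assuming requests arrive as (demand, duration) pairs; the data layout and names are illustrative, not the paper's Java implementation:

        from dataclasses import dataclass, field

        @dataclass
        class Host:
            capacity: float
            used: float = 0.0
            jobs: list = field(default_factory=list)   # (finish_time, demand) pairs

        def place_requests(requests, capacity, now=0.0, hosts=None):
            # Pack largest-demand requests first (the "decreasing" part of
            # best-fit decreasing).
            hosts = hosts or []
            for demand, duration in sorted(requests, reverse=True):
                fitting = [h for h in hosts if h.capacity - h.used >= demand]
                if fitting:
                    # Best fit: the active host left with the least free space.
                    host = min(fitting, key=lambda h: h.capacity - h.used)
                else:
                    host = Host(capacity)   # power on a new bin only when forced to
                    hosts.append(host)
                host.used += demand
                host.jobs.append((now + duration, demand))
            return hosts

        def release_finished(hosts, now):
            # Dynamic part: free completed requests and dissolve idle hosts,
            # so energy is not spent on machines with no remaining work.
            for h in hosts:
                h.jobs = [(t, d) for (t, d) in h.jobs if t > now]
                h.used = sum(d for _, d in h.jobs)
            return [h for h in hosts if h.jobs]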

    Multi-Class Skin Lesion Classification Using a Lightweight Dynamic Kernel Deep-Learning-Based Convolutional Neural Network

    Skin is the primary protective layer of the body's internal organs. Nowadays, due to increasing pollution and multiple other factors, various types of skin disease are growing globally. With variable shapes and multiple types, the classification of skin lesions is a challenging task. Motivated by this spreading deformity in society, a lightweight and efficient model is proposed for highly accurate classification of skin lesions. Dynamic-sized kernels are used in the layers to obtain the best results while keeping the number of trainable parameters very small. Both ReLU and LeakyReLU activation functions are purposefully used in the proposed model. The model accurately classified all classes of the HAM10000 dataset, achieving an overall accuracy of 97.85%, which is much better than multiple state-of-the-art heavy models. Our work is also compared with popular state-of-the-art and recent existing models.
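
    A hedged Keras sketch of a lightweight classifier in this spirit; the layer counts, filter widths, input size, and the 7 -> 5 -> 3 kernel schedule are assumptions standing in for the paper's undisclosed configuration, not its exact architecture:

        from tensorflow.keras import layers, models

        def build_lightweight_cnn(num_classes=7, input_shape=(64, 64, 3)):
            # Kernel size shrinks with depth, one reading of the
            # "dynamic-sized kernels" idea in the abstract.
            return models.Sequential([
                layers.Input(shape=input_shape),
                layers.Conv2D(16, 7, padding="same", activation="relu"),
                layers.MaxPooling2D(),
                layers.Conv2D(32, 5, padding="same"),
                layers.LeakyReLU(),                 # LeakyReLU in deeper layers
                layers.MaxPooling2D(),
                layers.Conv2D(64, 3, padding="same"),
                layers.LeakyReLU(),
                layers.GlobalAveragePooling2D(),    # keeps the parameter count small
                layers.Dense(num_classes, activation="softmax"),
            ])

        model = build_lightweight_cnn()   # HAM10000 has seven lesion classes
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])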

    Deep Learning Model for the Detection of Real Time Breast Cancer Images Using Improved Dilation-Based Method

    Breast cancer can develop when breast cells replicate abnormally. It is now a worldwide issue concerning people's safety, and women die from it every day; it is especially common in the United States. Mammography, CT, MRI, ultrasound, and biopsies may all be used to detect breast cancer, and histopathology (biopsy) is often carried out to examine the image and discover the disease. Detection at an early stage saves lives, and deep and machine learning models aid in that detection. The aim of this research is to encourage medical research and the development of technology by employing deep learning models to recognize cancer cells that are small in size. The proposed technique uses the BreCaHAD dataset for histological annotation and diagnosis. Color divergence is caused by differences in slide scanners, staining procedures, and biopsy materials, so to avoid overfitting we used data augmentation with 19 factors, such as scale, rotation, and gamma. The proposed hybrid dilation deep learning model has two parts: it emphasizes edges, curves, and colors, improving the key traits, and it uses dilated convolution and max pooling to gather multi-scale information. The proposed dilated unit processes the image and sends the resulting features to AlexNet; by using the dilated residual expanding-kernel model, it can recognize minute objects and thin borders. An AUC of 96.15 shows that the new strategy outperforms the old one.
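
    A hedged Keras sketch of a dilated multi-scale unit like the one described, with assumed dilation rates of 1, 2, and 4 and assumed filter counts; the AlexNet-style head that would consume the features is omitted:

        from tensorflow.keras import layers, models

        def dilated_unit(x):
            # Parallel dilated convolutions gather multi-scale context
            # (thin borders and minute objects) without many extra parameters.
            branches = [
                layers.Conv2D(32, 3, padding="same", dilation_rate=r,
                              activation="relu")(x)
                for r in (1, 2, 4)
            ]
            merged = layers.Concatenate()(branches)
            return layers.MaxPooling2D()(merged)   # max pooling, as in the abstract

        inputs = layers.Input(shape=(256, 256, 3))
        features = dilated_unit(inputs)
        # The abstract then feeds these features to an AlexNet-style classifier;
        # that head is not reproduced here.
        stem = models.Model(inputs, features)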

    Deep Learning-Based Computer-Aided Pneumothorax Detection Using Chest X-ray Images

    Pneumothorax is a thoracic disease leading to failure of the respiratory system, cardiac arrest, or, in extreme cases, death. Chest X-ray (CXR) imaging is the primary technique for diagnosing pneumothorax, and a computerized system that detects pneumothorax in chest radiographs provides substantial benefits in disease diagnosis. In the present work, a deep learning neural network model is proposed to detect the regions of pneumothoraces in chest X-ray images. The model incorporates the Mask Regional Convolutional Neural Network (Mask R-CNN) framework and transfer learning, with ResNet101 as the backbone of the feature pyramid network (FPN). The proposed model was trained on a pneumothorax dataset prepared by the Society for Imaging Informatics in Medicine in association with the American College of Radiology (SIIM-ACR). The present work compares the proposed Mask R-CNN model with a ResNet101 FPN against the conventional model with a ResNet50 FPN: the proposed model had lower class loss, bounding-box loss, and mask loss. Both models were simulated with learning rates of 0.0004 and 0.0006 for 10 and 12 epochs, respectively.
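
    One way to configure such a model, assuming the widely used Matterport Mask R-CNN implementation (the abstract does not name its code base); the class names, weight file, and paths here are illustrative:

        from mrcnn.config import Config
        from mrcnn import model as modellib

        class PneumothoraxConfig(Config):
            NAME = "pneumothorax"
            BACKBONE = "resnet101"      # "resnet50" reproduces the baseline model
            NUM_CLASSES = 1 + 1         # background + pneumothorax
            LEARNING_RATE = 0.0004      # the abstract also reports runs at 0.0006
            IMAGES_PER_GPU = 1

        config = PneumothoraxConfig()
        model = modellib.MaskRCNN(mode="training", config=config,
                                  model_dir="./logs")
        # Transfer learning: start from COCO weights, skipping the head layers
        # whose shapes depend on NUM_CLASSES.
        model.load_weights("mask_rcnn_coco.h5", by_name=True,
                           exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                    "mrcnn_bbox", "mrcnn_mask"])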

    Modified U-NET Architecture for Segmentation of Skin Lesion

    Dermoscopy images can be classified more accurately if skin lesions or nodules are first segmented. Because of their fuzzy borders, irregular boundaries, and inter- and intra-class variance, nodule segmentation is a difficult task. Several algorithms have been developed for segmenting skin lesions in dermoscopic images, but their accuracy lags well behind the industry standard. In this paper, a modified U-Net architecture is proposed that changes the dimensions of the feature maps for accurate, automatic segmentation of dermoscopic images. In addition, adding more kernels to the feature maps allows a more precise extraction of the nodule. We evaluated the effectiveness of the proposed model over several hyperparameters, such as the number of epochs, batch size, and choice of optimizer, testing it with augmentation techniques that enlarge the number of images available in the PH2 dataset. The proposed model performed best with the Adam optimizer, a batch size of 8, and 75 epochs.
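
    A minimal Keras sketch of a small U-Net trained with the reported setting; the depth, filter widths, and input size are assumptions, not the paper's exact modification:

        from tensorflow.keras import layers, models

        def conv_block(x, filters):
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            return x

        def build_unet(input_shape=(192, 256, 3)):
            inputs = layers.Input(input_shape)
            c1 = conv_block(inputs, 32)                       # encoder
            c2 = conv_block(layers.MaxPooling2D()(c1), 64)
            b = conv_block(layers.MaxPooling2D()(c2), 128)    # bottleneck
            u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
            c3 = conv_block(u2, 64)                           # decoder with skips
            u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
            c4 = conv_block(u1, 32)
            outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # lesion mask
            return models.Model(inputs, outputs)

        model = build_unet()
        model.compile(optimizer="adam", loss="binary_crossentropy")
        # Best-reported setting in the abstract: Adam, batch size 8, 75 epochs.
        # model.fit(images, masks, batch_size=8, epochs=75)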

    U-Net Model with Transfer Learning Model as a Backbone for Segmentation of Gastrointestinal Tract

    The human gastrointestinal (GI) tract is an important part of the body. According to World Health Organization (WHO) research, GI tract infections kill 1.8 million people each year, and in 2019 almost 5 million individuals were diagnosed with gastrointestinal disease. Radiation therapy has the potential to improve cure rates in GI cancer patients: radiation oncologists direct X-ray beams at the tumour while avoiding the stomach and intestines, the objective being to improve dose delivery to the tumour. This study offers a technique for segmenting the GI tract organs (small bowel, large intestine, and stomach) to help radiation oncologists treat cancer patients more quickly and accurately. The suggested model is a U-Net designed from scratch for segmenting small images so that local features are extracted more efficiently. Furthermore, six transfer learning models were employed as backbones of the U-Net topology: Inception V3, SeResNet50, VGG19, DenseNet121, InceptionResNetV2, and EfficientNet B0. The suggested model was analysed with model loss, Dice coefficient, and IoU. The results indicate that the suggested model outperforms all the transfer learning models, with a model loss of 0.122, a Dice coefficient of 0.8854, and an IoU of 0.8819.
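
    One hedged way to set up the backbone comparison, assuming the segmentation_models library, which provides U-Nets with exactly these six pretrained encoders; the loss and optimizer choices here are illustrative, not the paper's stated training recipe:

        import segmentation_models as sm

        BACKBONES = ["inceptionv3", "seresnet50", "vgg19",
                     "densenet121", "inceptionresnetv2", "efficientnetb0"]

        def build_model(backbone):
            # Three output channels: small bowel, large intestine, stomach.
            return sm.Unet(backbone, classes=3, activation="sigmoid",
                           encoder_weights="imagenet")

        model = build_model(BACKBONES[0])
        model.compile(optimizer="adam",
                      loss=sm.losses.dice_loss,     # pairs with the Dice coefficient
                      metrics=[sm.metrics.iou_score])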